
    Effect of Statistical Fluctuation in Monte Carlo Based Photon Beam Dose Calculation on Gamma Index Evaluation

    The gamma-index test has been commonly adopted to quantify the degree of agreement between a reference dose distribution and an evaluation dose distribution. Monte Carlo (MC) simulation has been widely used for radiotherapy dose calculation for both clinical and research purposes. The goal of this work is to investigate, both theoretically and experimentally, the impact of MC statistical fluctuation on the gamma-index test when the fluctuation exists in the reference, the evaluation, or both dose distributions. To first-order approximation, we demonstrated theoretically in a simplified model that the statistical fluctuation tends to overestimate gamma-index values when it exists in the reference dose distribution and underestimate them when it exists in the evaluation dose distribution, provided the original gamma-index value is relatively large compared with the statistical fluctuation. Our numerical experiments using clinical photon radiation therapy cases have shown that 1) when performing a gamma-index test between an MC reference dose and a non-MC evaluation dose, the average gamma-index is overestimated and the passing rate decreases as the noise level in the reference dose increases; 2) when performing a gamma-index test between a non-MC reference dose and an MC evaluation dose, the average gamma-index is underestimated within the clinically relevant range and the passing rate increases as the noise level in the evaluation dose increases; 3) when performing a gamma-index test between an MC reference dose and an MC evaluation dose, the passing rate is overestimated due to the noise in the evaluation dose and underestimated due to the noise in the reference dose. We conclude that the gamma-index test should be used with caution when comparing dose distributions computed with Monte Carlo simulation.
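
    To make the quantity under discussion concrete, the following is a minimal sketch of a global gamma-index computation for 1D dose profiles. The paper works with clinical 2D/3D distributions; the 3%/3 mm criteria, profile shapes, and noise level used here are illustrative assumptions only.

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, spacing_mm,
                   dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1D global gamma index.

    dose_ref, dose_eval : 1D dose arrays on the same grid
    spacing_mm          : grid spacing in mm
    dose_tol            : dose-difference criterion (fraction of max reference dose)
    dist_tol_mm         : distance-to-agreement criterion in mm
    """
    dd_norm = dose_tol * dose_ref.max()            # global dose normalization
    x_eval = np.arange(dose_eval.size) * spacing_mm
    x_ref = np.arange(dose_ref.size) * spacing_mm

    gamma = np.empty_like(dose_ref, dtype=float)
    for i, (xi, d_ref) in enumerate(zip(x_ref, dose_ref)):
        dose_term = (dose_eval - d_ref) / dd_norm
        dist_term = (x_eval - xi) / dist_tol_mm
        gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()  # minimum over the search
    return gamma

# Example: noisy evaluation dose vs. a smooth reference profile
x = np.linspace(0, 100, 201)                       # 0.5 mm spacing
ref = np.exp(-((x - 50) / 20)**2)                  # smooth "reference" profile
eva = ref + np.random.normal(0, 0.01, ref.size)    # 1% Gaussian noise
g = gamma_index_1d(ref, eva, spacing_mm=0.5)
print("passing rate (gamma <= 1):", np.mean(g <= 1.0))
```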

    Polarized linewidth-controllable double-trapping electromagnetically induced transparency spectra in a resonant plasmon nanocavity

    Surface plasmons with ultrasmall optical mode volume and strong near-field enhancement can be used to realize nanoscale light-matter interaction. Combining surface plasmons with quantum systems offers the possibility of realizing important quantum optical phenomena at the nanoscale, including electromagnetically induced transparency (EIT), which has many applications in nonlinear quantum optics and quantum information processing. Here, using a custom-designed resonant plasmon nanocavity, we demonstrate polarized, position-dependent, linewidth-controllable EIT spectra at the nanoscale. We analytically obtain the double coherent population trapping conditions in a double-Λ quantum system with crossing damping, which give two transparent points in the EIT spectra. The linewidths of the three peaks are extremely sensitive to the level spacing of the excited states, the Rabi frequencies and detunings of the pump fields, and the Purcell factors. In particular, the linewidth of the central peak is exceptionally narrow. The hybrid system may have potential applications in ultra-compact plasmon-quantum devices.
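
    For background, the sketch below evaluates the textbook weak-probe response of an ordinary single-Λ three-level system, the simplest setting in which an EIT transparency window appears. It is only an illustrative stand-in and does not reproduce the paper's double-Λ system with crossing damping or its two transparency points.

```python
import numpy as np

# Textbook single-Lambda EIT probe response (weak-probe limit), used here only as a
# simplified stand-in for the double-Lambda system analyzed in the paper.
def eit_probe_coherence(delta_p, omega_c, gamma_31, gamma_21, delta_c=0.0):
    """Steady-state probe coherence rho_31 / (Omega_p / 2) for a Lambda system.

    delta_p  : probe detuning (array)
    omega_c  : coupling-field Rabi frequency
    gamma_31 : optical coherence decay rate
    gamma_21 : ground-state coherence decay rate
    delta_c  : coupling-field detuning
    """
    delta_2ph = delta_p - delta_c                    # two-photon detuning
    denom = (gamma_31 + 1j * delta_p) + (omega_c**2 / 4) / (gamma_21 + 1j * delta_2ph)
    return 1j / denom                                # absorption ~ Im, dispersion ~ Re

delta = np.linspace(-10, 10, 2001)                   # in units of gamma_31
rho = eit_probe_coherence(delta, omega_c=2.0, gamma_31=1.0, gamma_21=0.01)
absorption = rho.imag                                # transparency dip at two-photon resonance
print("minimum absorption near delta = 0:", absorption[np.abs(delta) < 0.05].min())
```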

    Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy

    Purpose: To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Methods: Given a set of volumetric images of a patient at N breathing phases as the training data, we perform deformable image registration between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, we can generate new DVFs, which, when applied to the reference image, lead to new volumetric images. We can then reconstruct a volumetric image from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF to its position in the reference image. Our algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. We generated the training data using a realistic and dynamic mathematical phantom with 10 breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. Results: The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 mm +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 seconds (range: 0.17 to 0.35 seconds). Conclusions: We have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image. Comment: 8 pages, 3 figures, submitted to Medical Physics Letter.
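
    The following is a highly simplified 1D sketch of the reconstruction idea described above: PCA of training DVFs, warping the reference image, and fitting the PCA coefficients so the computed projection matches the measured one. The linear toy projection operator, the smooth toy DVFs, and the Nelder-Mead optimizer are assumptions for illustration, not the paper's GPU implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_vox, n_phases, n_det = 64, 10, 16
x = np.arange(n_vox, dtype=float)

ref_image = np.exp(-((x - 32.0) / 6.0)**2)                 # reference-phase "image"

# Toy training DVFs: random mixtures of two smooth deformation modes
modes = np.stack([np.exp(-((x - 24.0) / 10.0)**2),
                  np.exp(-((x - 40.0) / 10.0)**2)])
dvfs = rng.normal(0, 2.0, (n_phases - 1, 2)) @ modes

# PCA of the training DVFs: keep the leading eigenvectors
mean_dvf = dvfs.mean(axis=0)
_, _, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
components = vt[:2]

P = rng.normal(0, 1.0, (n_det, n_vox))                     # toy projection operator

def warp(image, dvf):
    """Resample the image at positions shifted by the DVF (linear interpolation)."""
    return np.interp(x + dvf, x, image)

def objective(coeffs, measured):
    dvf = mean_dvf + coeffs @ components
    return np.sum((P @ warp(ref_image, dvf) - measured) ** 2)

# Simulate one "measured" projection from known coefficients, then recover them
true_coeffs = np.array([1.0, -0.5])
measured = P @ warp(ref_image, mean_dvf + true_coeffs @ components)
fit = minimize(objective, x0=np.zeros(2), args=(measured,), method="Nelder-Mead")
print("recovered PCA coefficients:", fit.x)
```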

    Is ChatGPT a Good Multi-Party Conversation Solver?

    Large Language Models (LLMs) have emerged as influential instruments within the realm of natural language processing; nevertheless, their capacity to handle multi-party conversations (MPCs) -- a scenario marked by the presence of multiple interlocutors involved in intricate information exchanges -- remains largely uncharted. In this paper, we delve into the potential of generative LLMs such as ChatGPT and GPT-4 within the context of MPCs. An empirical analysis is conducted to assess the zero-shot learning capabilities of ChatGPT and GPT-4 by evaluating them on three MPC datasets that encompass five representative tasks. The findings reveal that ChatGPT's performance on a number of the evaluated MPC tasks leaves much to be desired, whilst GPT-4's results portend a promising future. Additionally, we endeavor to bolster performance through the incorporation of MPC structures, encompassing both speaker and addressee architecture. This study provides an exhaustive evaluation and analysis of applying generative LLMs to MPCs, casting light upon the conception and creation of increasingly effective and robust MPC agents. Concurrently, this work underscores the challenges implicit in the utilization of LLMs for MPCs, such as deciphering graphical information flows and generating stylistically consistent responses. Comment: Accepted by Findings of EMNLP 2023.
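
    As a rough illustration of what incorporating MPC structure into a zero-shot prompt might look like, the sketch below serializes speaker and addressee labels into a plain-text prompt. The turn format, names, and task instruction are hypothetical and are not the prompt templates used in the paper.

```python
# Minimal sketch of serializing a multi-party conversation (MPC) into a zero-shot
# prompt. The turn format and task instruction are illustrative assumptions.

mpc_turns = [
    {"speaker": "Alice", "addressee": "Bob",   "text": "Did the nightly build pass?"},
    {"speaker": "Bob",   "addressee": "Alice", "text": "No, two integration tests failed."},
    {"speaker": "Carol", "addressee": "Bob",   "text": "Were those the flaky network tests?"},
]

def build_mpc_prompt(turns, task="Identify the addressee of the last utterance."):
    lines = ["The following is a multi-party conversation."]
    for t in turns:
        # Encode both speaker and addressee explicitly so the model can use the
        # conversation structure rather than plain turn order.
        lines.append(f'{t["speaker"]} (to {t["addressee"]}): {t["text"]}')
    lines.append(task)
    return "\n".join(lines)

prompt = build_mpc_prompt(mpc_turns)
print(prompt)  # send this string to any chat-completion style LLM endpoint
```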

    DiffuSIA: A Spiral Interaction Architecture for Encoder-Decoder Text Diffusion

    Diffusion models have emerged as the new state-of-the-art family of deep generative models, and their promising potential for text generation has recently attracted increasing attention. Existing studies mostly adopt a single-encoder architecture with a partially noising process for conditional text generation, but its flexibility for conditional modeling is limited. In fact, the encoder-decoder architecture is naturally more flexible thanks to its detachable encoder and decoder modules, and it is extensible to multilingual and multimodal generation tasks for conditions and target texts. However, the encoding process of the conditional texts lacks an understanding of the target texts. To this end, a spiral interaction architecture for encoder-decoder text diffusion (DiffuSIA) is proposed. Concretely, the conditional information from the encoder is designed to be captured by the diffusion decoder, while the target information from the decoder is designed to be captured by the conditional encoder. These two types of information flow run through the multilayer interaction spirally for deep fusion and understanding. DiffuSIA is evaluated on four text generation tasks, including paraphrase, text simplification, question generation, and open-domain dialogue generation. Experimental results show that DiffuSIA achieves competitive performance compared with previous methods on all four tasks, demonstrating the effectiveness and generalization ability of the proposed method. Comment: Work in Progress.
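
    The abstract does not specify the architecture in detail, but the bidirectional conditional-target exchange it describes can be sketched roughly as a pair of cross-attention passes repeated across layers. The module below is an illustrative assumption, not the actual DiffuSIA implementation.

```python
import torch
import torch.nn as nn

# Rough sketch of one "spiral" interaction step: the decoder attends to the encoder's
# conditional states, and the encoder attends back to the decoder's target states.
# Layer sizes and the alternating scheme are assumptions for illustration only.

class SpiralInteractionBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.cond_to_target = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.target_to_cond = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_cond = nn.LayerNorm(d_model)
        self.norm_target = nn.LayerNorm(d_model)

    def forward(self, cond_states, target_states):
        # Target (decoder) side gathers conditional information from the encoder.
        tgt_update, _ = self.cond_to_target(target_states, cond_states, cond_states)
        target_states = self.norm_target(target_states + tgt_update)
        # Conditional (encoder) side gathers target information from the decoder.
        cond_update, _ = self.target_to_cond(cond_states, target_states, target_states)
        cond_states = self.norm_cond(cond_states + cond_update)
        return cond_states, target_states

cond = torch.randn(2, 32, 256)     # batch of conditional-text states
target = torch.randn(2, 24, 256)   # batch of noised target-text states
blocks = nn.ModuleList([SpiralInteractionBlock() for _ in range(3)])
for blk in blocks:                  # repeated across layers -> "spiral" information flow
    cond, target = blk(cond, target)
print(cond.shape, target.shape)
```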

    Dirac Fermion in Strongly-Bound Graphene Systems

    It is highly desirable to integrate graphene into existing semiconductor technology, where the combined system is thermodynamically stable yet maintains a Dirac cone at the Fermi level. First-principles calculations reveal that certain transition metal (TM) intercalated graphene/SiC(0001) systems, such as the strongly-bound graphene/intercalated-Mn/SiC, could be such systems. Different from free-standing graphene, the hybridization between graphene and Mn/SiC leads to the formation of a dispersive Dirac cone of primarily TM d character. The corresponding Dirac spectrum is still isotropic, and the transport behavior is nearly identical to that of free-standing graphene for a bias as large as 0.6 V, except that the Fermi velocity is half that of graphene. A simple model Hamiltonian is developed to qualitatively account for the physics of the transfer of the Dirac cone from a dispersive system (e.g., graphene) to an originally non-dispersive system (e.g., the TM). Comment: submitted Apr 25th, 2012.
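
    As a purely illustrative toy model (an assumption, not the model Hamiltonian developed in the paper), one can hybridize graphene's two-band Dirac Hamiltonian with two degenerate flat d-like levels, one coupled to each sublattice, and watch a Dirac crossing reappear at the hybridized energies with a reduced slope:

```python
import numpy as np

# Illustrative toy model: a 2D Dirac cone hybridized with two degenerate flat
# d-like levels. The hybridized crossing at energies +/- t disperses with a
# reduced velocity (close to v/2 when the flat level sits at the Dirac point).

v = 1.0    # bare graphene Dirac velocity (arbitrary units)
t = 0.5    # graphene / d-level hybridization strength (assumed)
e_d = 0.0  # flat d level pinned at the original Dirac-point energy

def hamiltonian(kx, ky):
    """4x4 H(k) in the basis (A, B, d_A, d_B)."""
    vk = v * (kx - 1j * ky)
    return np.array([
        [0.0,         vk,  t,   0.0],
        [np.conj(vk), 0.0, 0.0, t  ],
        [t,           0.0, e_d, 0.0],
        [0.0,         t,   0.0, e_d],
    ], dtype=complex)

ks = np.linspace(-0.05, 0.05, 201)
bands = np.array([np.sort(np.linalg.eigvalsh(hamiltonian(k, 0.0))) for k in ks])

# Slope of the topmost band near k = 0: for e_d = 0 it comes out close to v/2,
# i.e. the hybridized Dirac cone at energy +t disperses at roughly half the bare velocity.
slope = (bands[-1, 3] - bands[len(ks) // 2, 3]) / (ks[-1] - ks[len(ks) // 2])
print("hybridized cone velocity:", slope, " bare velocity:", v)
```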